Deep learning-based joint channel estimation and equalization algorithm for C-V2X communications
CHEN Chengrui, SUN Ning, HE Shibiao, LIAO Yong
Journal of Computer Applications    2021, 41 (9): 2687-2693.   DOI: 10.11772/j.issn.1001-9081.2020111779
To effectively improve the Bit Error Rate (BER) performance of a communication system without significantly increasing the computational complexity, a deep learning-based joint channel estimation and equalization algorithm named V-EstEqNet was proposed for the Cellular-Vehicle to Everything (C-V2X) communication system, exploiting the powerful data-processing ability of deep learning. Unlike traditional algorithms, in which channel estimation and equalization are carried out at the receiver in two separate stages, V-EstEqNet considered them jointly and used a deep learning network to directly correct and restore the received data, so that channel equalization was completed without explicit channel estimation. Specifically, a large amount of received data was used to train the network offline, so that the channel characteristics superimposed on the received data were learned by the network, and these characteristics were then used to recover the original transmitted data. Simulation results show that the proposed algorithm tracks channel characteristics more effectively in different speed scenarios. Compared with the traditional channel estimation algorithms (Least Squares (LS) and Linear Minimum Mean Square Error (LMMSE)) combined with the traditional channel equalization algorithms (Zero Forcing (ZF) and Minimum Mean Square Error (MMSE) equalization), the proposed algorithm achieves a maximum BER gain of 6 dB in low-speed environments and 9 dB in high-speed environments.
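The end-to-end idea in this abstract, received data in and corrected symbols out with no explicit estimation stage, can be illustrated with a deliberately minimal stand-in. The abstract does not specify V-EstEqNet's architecture, so the sketch below replaces it with a single linear layer fitted offline on a hypothetical 2-tap channel; the channel taps, block length, and BPSK modulation are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

h = np.array([0.9, 0.3])                  # hypothetical 2-tap channel,
                                          # never exposed to the "network"
def channel(x):
    return np.convolve(x, h, mode="same") + 0.01 * rng.normal(size=x.size)

# Offline training: blocks of BPSK symbols and their received versions
X = rng.choice([-1.0, 1.0], size=(2000, 16))
Y = np.stack([channel(x) for x in X])

# "Network" = one linear layer fitted by least squares: Y @ W ~ X.
# The channel characteristics are absorbed into W; no explicit channel
# estimate is ever formed.
W, *_ = np.linalg.lstsq(Y, X, rcond=None)

# Online: equalize a fresh received block directly
x_new = rng.choice([-1.0, 1.0], size=16)
x_hat = np.sign(channel(x_new) @ W)
```

A real system would use a deep nonlinear network and OFDM frames, but the training-offline / apply-directly split is the same.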
CNN model compression based on an activation-entropy layer-wise iterative pruning strategy
CHEN Chengjun, MAO Yingchi, WANG Yichao
Journal of Computer Applications    2020, 40 (5): 1260-1265.   DOI: 10.11772/j.issn.1001-9081.2019111977

Since the existing pruning strategies for the Convolutional Neural Network (CNN) model vary widely and achieve only moderate results, an Activation-Entropy based Layer-wise Iterative Pruning (AE-LIP) strategy was proposed to reduce the number of parameters while keeping the accuracy loss within a controllable range. Firstly, combining neuron activation values with information entropy, a weight evaluation criterion based on activation-entropy was constructed and a weight importance score was calculated. Secondly, pruning was performed layer by layer: the weights were sorted by importance score and, according to the pruning number of each layer, the weights to be pruned were selected and set to zero. Finally, the model was fine-tuned, and the above process was repeated until the iteration ended. The experimental results show that the proposed strategy compresses the AlexNet model by 87.5% with an accuracy drop of 2.12 percentage points, a resulting accuracy 1.54 percentage points higher than that of the magnitude-based weight pruning strategy and 0.91 percentage points higher than that of the correlation-based weight pruning strategy; it compresses the VGG-16 model by 84.1% with an accuracy drop of 2.62 percentage points, 0.62 and 0.27 percentage points better than the two strategies above, respectively. The proposed strategy thus effectively reduces the size of the CNN model while preserving its accuracy, and facilitates the deployment of CNN models on mobile devices with limited storage.
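The abstract names the ingredients of the importance score (neuron activation values and information entropy) but not the exact formula, so the sketch below assumes one plausible form: each weight's magnitude scaled by the entropy of its input neuron's activation distribution, followed by layer-wise zeroing of the lowest-scoring weights. The score definition, histogram binning, and pruning ratio are all illustrative assumptions.

```python
import numpy as np

def activation_entropy_score(weights, activations, bins=10):
    # Entropy of each input neuron's activation distribution
    ent = []
    for col in activations.T:            # activations: (samples, in_features)
        hist, _ = np.histogram(col, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        ent.append(-(p * np.log2(p)).sum())
    ent = np.asarray(ent)                # (in_features,)
    # Assumed score: weight magnitude scaled by input-neuron entropy
    return np.abs(weights) * ent         # (out_features, in_features)

def prune_layer(weights, scores, ratio):
    # Zero out the `ratio` fraction of weights with the lowest scores
    k = int(ratio * weights.size)
    threshold = np.sort(scores.ravel())[k]
    mask = scores >= threshold
    return weights * mask, mask

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))              # one layer's weights
A = rng.normal(size=(100, 8))            # activations feeding that layer
W_pruned, mask = prune_layer(W, activation_entropy_score(W, A), ratio=0.5)
```

Iterating this per layer, with fine-tuning between rounds, gives the layer-wise iterative loop the abstract describes.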

Analysis of international influence of news media for major social security emergencies
Chen CHEN, Shaowu ZHANG, Liang YANG, Dongyu ZHANG, Hongfei LIN
Journal of Computer Applications    2020, 40 (2): 524-529.   DOI: 10.11772/j.issn.1001-9081.2019091629

Public opinion on major social security emergencies in the era of big data spreads mainly through the media, yet most existing research neither considers news media as a special group of users nor measures their influence within a specific class of events. To study this problem, a method for evaluating influence by integrating the network structure and the behavioral relationships between users was proposed, and the violent terrorist events in Xinjiang and Paris were taken as examples to calculate, on the Twitter platform, the international influence of news media from different countries on such events. This evaluation method better captures the influence of individual news media at the event level. The experimental results of calculating news media influence in the Xinjiang and Paris violent terrorist events show that the influence of news media from different countries differs between the two events, indicating that these two events of the same type had different scopes of influence, and also reflecting the differing political positions of different countries.

Hybrid precoding scheme based on improved particle swarm optimization algorithm in mmWave massive MIMO system
LI Renmin, HUANG Jinsong, CHEN Chen, WU Junqin
Journal of Computer Applications    2018, 38 (8): 2365-2369.   DOI: 10.11772/j.issn.1001-9081.2017123026
To address the problem that the hybrid precoding scheme based on the traditional Particle Swarm Optimization (PSO) algorithm in millimeter-Wave (mmWave) massive Multi-Input Multi-Output (MIMO) systems converges slowly and easily falls into a local optimum in later iterations, a hybrid precoding scheme based on an improved PSO algorithm was proposed. Firstly, the particles' position and velocity vectors were initialized randomly, and the initial swarm-optimal position vector was obtained by maximizing the system sum rate. Secondly, the position and velocity vectors were updated; two updated individual-historical-best position vectors were randomly selected and their weighted sum taken as a new individual-historical-best position vector, and the particles that maximized the system sum rate were then picked out. The weighted average of the individual-historical-best position vectors of these particles was taken as the new swarm-optimal position vector and compared with the previous one. After many iterations, the final swarm-optimal position vector was obtained, which is the desired hybrid precoding vector. The simulation results show that, compared with the hybrid precoding scheme based on the traditional PSO algorithm, the proposed scheme improves both convergence speed and sum rate: its convergence speed is improved by 100%, and its performance reaches 90% of that of the fully digital precoding scheme. The proposed scheme can therefore effectively improve system performance and accelerate convergence.
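The two modifications described above (blending randomly chosen individual bests, and averaging the top particles' bests into the swarm optimum) can be sketched on a generic objective. The stand-in objective, swarm size, blending weight alpha, and top-3 choice are all assumptions; a real implementation would evaluate the mmWave system sum rate of the hybrid precoder encoded by each particle.

```python
import numpy as np

rng = np.random.default_rng(1)

def sum_rate(x):
    # Stand-in objective with a known optimum at x = 0.5 in every dimension
    return -np.sum((x - 0.5) ** 2)

n_particles, dim = 20, 8
pos = rng.uniform(-1.0, 1.0, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()
pbest_val = np.array([sum_rate(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

w, c1, c2, alpha = 0.7, 1.5, 1.5, 0.5
for _ in range(50):
    r1 = rng.random((n_particles, dim))
    r2 = rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([sum_rate(p) for p in pos])
    better = vals > pbest_val
    pbest[better] = pos[better]
    pbest_val[better] = vals[better]
    # Modification (i): the weighted sum of two randomly chosen
    # individual-historical-best vectors replaces a third particle's
    # individual best when it scores higher.
    i, j, k = rng.choice(n_particles, 3, replace=False)
    cand = alpha * pbest[i] + (1.0 - alpha) * pbest[j]
    if sum_rate(cand) > pbest_val[k]:
        pbest[k] = cand
        pbest_val[k] = sum_rate(cand)
    # Modification (ii): the swarm-optimal vector is the weighted average
    # of the individual bests of the highest-scoring particles.
    top = np.argsort(pbest_val)[-3:]
    gbest = pbest[top].mean(axis=0)
```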
Image super-resolution reconstruction based on four-channel convolutional sparse coding
CHEN Chen, ZHAO Jianwei, CAO Feilong
Journal of Computer Applications    2018, 38 (6): 1777-1783.   DOI: 10.11772/j.issn.1001-9081.2017112742
To address the problem of low image resolution, a new image super-resolution reconstruction method based on four-channel convolutional sparse coding was proposed. Firstly, the input image was rotated by 90° in turn to form the inputs of the four channels, and each input was decomposed into a high-frequency part and a low-frequency part by a low-pass filter and a gradient operator. Then, in each channel, the high-frequency and low-frequency parts of the low-resolution image were reconstructed by convolutional sparse coding and cubic interpolation, respectively. Finally, the four channel outputs were weighted and averaged to obtain the reconstructed high-resolution image. The experimental results show that the proposed method outperforms some classical super-resolution methods in Peak Signal-to-Noise Ratio (PSNR), Structural SIMilarity (SSIM), and noise immunity. The proposed method not only avoids the damage to consistency between image patches caused by overlapping patches, but also improves the detail contours and enhances the stability of the reconstructed image.
Traffic scheduling strategy based on improved Dijkstra algorithm for power distribution and utilization communication network
XIANG Min, CHEN Cheng
Journal of Computer Applications    2018, 38 (6): 1715-1720.   DOI: 10.11772/j.issn.1001-9081.2017112825
Concerning the congestion that easily arises during data aggregation in the power distribution and utilization communication network, a novel hybrid edge-weighted traffic scheduling and routing algorithm was proposed. Firstly, a hierarchical node model was established according to hop count. Then, the priorities of power distribution and utilization services and the node congestion levels were divided. Finally, the edge weights were calculated from a comprehensive index of hop count, traffic load rate, and link utilization. The nodes requiring traffic scheduling performed route selection with the improved Dijkstra algorithm, and severely congested nodes were also scheduled according to the service priorities. Compared with the Shortest Path First (SPF) algorithm and the Greedy Backpressure Routing Algorithm (GBRA), at a data generation rate of 80 kb/s the proposed algorithm reduces the packet loss rate of emergency services by 81.3% and 67.7% respectively, and that of key services by 79% and 63.8% respectively. The simulation results show that the proposed algorithm can effectively alleviate network congestion, improve the effective throughput, and reduce the end-to-end delay and the packet loss rate of high-priority services.
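A minimal sketch of the routing core described above: a composite edge weight built from the three indicators the abstract names, fed into plain Dijkstra. The weighting coefficients and the toy topology are illustrative assumptions, not values from the paper.

```python
import heapq

def edge_weight(hops, load_rate, link_util, a=0.4, b=0.3, c=0.3):
    # Composite weight from hop count, traffic load rate, and link
    # utilization; the coefficients a, b, c are assumptions.
    return a * hops + b * load_rate + c * link_util

def dijkstra(graph, src):
    # Plain Dijkstra over {node: [(neighbor, weight), ...]}
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy 4-node network: the A-B link is heavily loaded, so traffic to D
# is steered along the lightly loaded A-C-D path.
g = {
    "A": [("B", edge_weight(1, 0.9, 0.8)), ("C", edge_weight(1, 0.2, 0.1))],
    "B": [("D", edge_weight(1, 0.1, 0.1))],
    "C": [("D", edge_weight(1, 0.3, 0.2))],
}
dist = dijkstra(g, "A")
```

Because load and utilization enter the weight, the shortest path in this metric avoids congested links even when hop counts are equal.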
Improved differential fault attack on scalar multiplication algorithm in elliptic curve cryptosystem
XU Shengwei, CHEN Cheng, WANG Rongrong
Journal of Computer Applications    2016, 36 (12): 3328-3332.   DOI: 10.11772/j.issn.1001-9081.2016.12.3328
Concerning the failure of fault attacks on elliptic curve scalar multiplication algorithms, an improved differential fault attack algorithm was proposed. The nonzero assumption was eliminated, and an authentication mechanism was introduced to counter the "fault detection" failure threat. Using the elliptic curve specified by the SM2 algorithm, the binary scalar multiplication algorithm, the binary Non-Adjacent Form (NAF) scalar multiplication algorithm, and the Montgomery scalar multiplication algorithm were successfully attacked in software simulation, and the 256-bit private key was recovered within three hours. The attack process on the binary NAF scalar multiplication algorithm was optimized, reducing the attack time to one fifth of the original. The experimental results show that the proposed algorithm improves the effectiveness of the attack.
Action recognition based on depth images and skeleton data
LU Zhongqiu, HOU Zhenjie, CHEN Chen, LIANG Jiuzhen
Journal of Computer Applications    2016, 36 (11): 2979-2984.   DOI: 10.11772/j.issn.1001-9081.2016.11.2979
In order to make full use of depth images and skeleton data for action recognition, a multi-feature human action recognition method based on depth images and skeleton data was proposed. The multi-features comprised the Depth Motion Map (DMM) feature and the Quadruples skeletal feature (Quad). For depth images, the DMM was captured by projecting the depth image onto the three planes of a Cartesian coordinate system. For skeleton data, Quad was a calibration method for skeleton features whose result depends only on the skeleton posture. Meanwhile, a multi-model probabilistic voting strategy was proposed to reduce the influence of noisy data on classification. The proposed method was evaluated on the Microsoft Research Action 3D dataset and the Depth-included Human Action (DHA) database. The results indicate that the method has high accuracy and good robustness.
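A rough sketch of the DMM idea referenced above: project each depth frame onto the three Cartesian planes, then accumulate the frame-to-frame differences of each projection. For brevity the side and top projections here are simplified to 1-D depth profiles; DMM implementations typically build full 2-D binary projections per plane, so this is an assumption-laden simplification, not the paper's exact formulation.

```python
import numpy as np

def depth_motion_maps(frames):
    # frames: (T, H, W) depth sequence
    front = frames                # front (xy) view: the depth image itself
    side = frames.max(axis=2)     # side (yz) view, reduced to a row profile
    top = frames.max(axis=1)      # top (xz) view, reduced to a column profile
    # Accumulate absolute frame-to-frame differences of each projection
    return {
        name: np.abs(np.diff(proj.astype(float), axis=0)).sum(axis=0)
        for name, proj in (("front", front), ("side", side), ("top", top))
    }

frames = np.zeros((4, 6, 8))
frames[0, 1:3, 1:3] = 1.0         # a blob present only in the first frame
maps = depth_motion_maps(frames)
```

Static regions contribute nothing to the maps; only moving pixels accumulate motion energy, which is what makes the DMM a compact action descriptor.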
PageRank parallel algorithm based on Web link classification
CHEN Cheng, ZHAN Yinwei, LI Ying
Journal of Computer Applications    2015, 35 (1): 48-52.   DOI: 10.11772/j.issn.1001-9081.2015.01.0048

Concerning the low efficiency of the serial PageRank algorithm in dealing with massive Web data, a PageRank parallel algorithm based on Web link classification was proposed. Firstly, Web pages were classified according to their links, and pages from different websites were assigned different weights. Secondly, the page ranks were computed in parallel on the Hadoop platform using MapReduce, with its divide-and-conquer character. Finally, a three-layer data compression method comprising a data layer, a pretreatment layer, and a computation layer was adopted to optimize the parallel algorithm. The experimental results show that, compared with the serial PageRank algorithm, the accuracy of the proposed algorithm is improved by 12% and the efficiency is improved by 33% in the best case.
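The parallel computation step can be illustrated with MapReduce-style map and reduce functions for one PageRank iteration, run synchronously on a toy graph. The link-classification weighting and the Hadoop job plumbing are omitted; the damping factor 0.85 and the toy graph are conventional illustrative choices.

```python
from collections import defaultdict

def pagerank_map(page, rank, outlinks):
    # Map: re-emit the page's link structure and share its rank
    # equally among its outlinks.
    emits = [(page, ("links", outlinks))]
    for dest in outlinks:
        emits.append((dest, ("rank", rank / len(outlinks))))
    return emits

def pagerank_reduce(page, values, n_pages, d=0.85):
    # Reduce: combine incoming rank shares with the damping factor.
    links, incoming = [], 0.0
    for kind, v in values:
        if kind == "links":
            links = v
        else:
            incoming += v
    return page, (1 - d) / n_pages + d * incoming, links

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = {p: 1.0 / len(graph) for p in graph}
for _ in range(30):                 # each pass stands in for one MapReduce job
    shuffled = defaultdict(list)
    for p in graph:
        for key, val in pagerank_map(p, ranks[p], graph[p]):
            shuffled[key].append(val)
    ranks = {}
    for p, vals in shuffled.items():
        page, r, links = pagerank_reduce(p, vals, len(graph))
        ranks[page] = r
        graph[page] = links
```

On Hadoop, each iteration is one job: mappers emit rank shares keyed by destination page, the shuffle groups them, and reducers apply the damping formula.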

Hybrid emulation test method for large scale mobile Ad Hoc network
GUO Yichen, CHEN Jing, ZHANG Li, HUANG Conghui
Journal of Computer Applications    2013, 33 (01): 101-104.   DOI: 10.3724/SP.J.1087.2013.00101
Current Mobile Ad Hoc Network (MANET) test methods suffer from simplistic models, high cost, and difficulty of reproduction. To solve these problems, a large-scale MANET hybrid emulation testing method based on NS2 (LHEN) was proposed. Using the simulation capability of NS2, real and virtual packets were encapsulated and decapsulated by a Tap agent, and communication between the virtual and real environments was achieved through network objects and the NS2 real-time scheduler. The movement of a real node was emulated by controlling the wireless signal strength, thereby establishing a realistic network environment. Finally, large-scale MANETs were constructed by hybrid emulation and by pure simulation for contrast experiments. The experimental results show that the performance of the two is almost consistent, with a mean difference below 18.7%, which means LHEN can be applied to the testing and verification of some indicators of a large-scale MANET.
DPST: a scheduling algorithm of preventing slow task thrashing in heterogeneous environment
DUAN Han-cong, LI Jun-jie, CHEN Cheng, LI Lin
Journal of Computer Applications    2012, 32 (07): 1910-1912.   DOI: 10.3724/SP.J.1087.2012.01910
With regard to the thrashing problem of load-balancing algorithms in heterogeneous environments, a new scheduling algorithm called Dynamic Predetermination of Slow Task (DPST) was designed to reduce the probability of slow-task scheduling and improve load balancing. By defining a capability measure for heterogeneous tasks on heterogeneous nodes, the capacities of nodes executing heterogeneous tasks were normalized. With the introduction of predetermination, thrashing resulting from heterogeneous environments was reduced, and double queues of slow tasks and slow nodes improved scheduling efficiency. The experimental results show that the number of thrashing occurrences in heterogeneous environments fell by more than 40% compared with Hadoop. Because thrashing is effectively reduced, the DPST algorithm performs better at reducing average response time and increasing system throughput in heterogeneous environments.
Application of exposure fusion to single image dehazing
CHEN Chen, HU Shi-qiang, ZHANG Jun
Journal of Computer Applications    2012, 32 (01): 241-244.   DOI: 10.3724/SP.J.1087.2012.00241
This paper proposed a simple and effective method to remove haze from a single input image degraded by bad weather. First, the airlight was estimated using the dark channel prior. Then, under a mathematical model describing the formation of a hazy image, the depth of field of each pixel was sampled to obtain a virtual haze-free image sequence. Finally, an exposure criterion introduced from the exposure fusion algorithm was used to extract a haze-free image from the sequence by multi-scale image fusion. The experimental results show that the proposed method yields good results and is suitable for real-time applications.
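The airlight estimation step can be sketched with the common dark-channel recipe: take the per-pixel minimum over RGB, apply a patch-wise minimum filter, then average the colors of the brightest dark-channel pixels. The patch size (15) and top fraction (0.1%) are conventional choices from the dark-channel-prior literature, not values stated in this abstract.

```python
import numpy as np

def dark_channel(img, patch=15):
    # Per-pixel minimum over RGB, then a minimum filter over a square patch
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    out = np.empty_like(mins)
    for i in range(mins.shape[0]):
        for j in range(mins.shape[1]):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def estimate_airlight(img, patch=15, top=0.001):
    # Average the colors of the brightest 0.1% of dark-channel pixels
    dc = dark_channel(img, patch)
    n = max(1, int(top * dc.size))
    idx = np.argsort(dc.ravel())[-n:]
    return img.reshape(-1, 3)[idx].mean(axis=0)

# Synthetic scene: a hazy-bright region in an otherwise mid-gray image
img = np.full((40, 40, 3), 0.5)
img[:20, :20] = 0.9
airlight = estimate_airlight(img)
```

The brightest dark-channel pixels correspond to the haziest region, so their mean color approximates the global airlight used by the hazy-image formation model.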
New secret information hiding method for remote sensing images
Qin YUE, Chen CHEN
Journal of Computer Applications    2009, 29 (11): 2977-2979.  
To prevent the leakage of secret information hidden in a remote sensing image, a sample-based image inpainting method was used to hide the secret information. The method selected the region most similar to the secret information image and covered the secret information block, thus hiding the secret information. The method not only has strong invisibility but also has little influence on the use of the remote sensing image. Drawing on digital watermarking, whether users can see the secret information is determined by their authority status, which facilitates the use of the remote sensing image. Experimental results show that the method is feasible and improves the application security of remote sensing images.
Software model checking based on hierarchical unit partition
Chen CHEN, Yong-Sheng CHEN
Journal of Computer Applications   
This paper reviewed some prevalent trends in this domain in recent years, then proposed a software model checking scenario based on hierarchical unit partition and heuristic search. It has three phases: preprocessing, unit partition, and state-space search. An on-the-fly method is used in this scenario to improve the performance of model checking. Experiments show that this model checking scenario works well in mitigating the state explosion problem.